CS180 - Project 3: [Auto]Stitching Photo Mosaics

Part 2

B.1: Harris Corner Detection

We will start with the Harris interest point detector (Section 2 of "Multi-Image Matching using Multi-Scale Oriented Patches" by Brown et al.). We won't worry about multi-scale detection and will work at a single scale only. We also won't worry about sub-pixel accuracy.
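The single-scale detector can be sketched as below. This is a minimal version assuming a grayscale float image; the function names, the smoothing sigma, and the `edge_discard` margin are my own illustrative choices, not prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max

def harris_response(im, sigma=1.0):
    """Harris corner strength map for a grayscale image (single scale)."""
    dy, dx = np.gradient(im)
    # Smoothed components of the gradient outer-product (structure) matrix
    Ixx = gaussian_filter(dx * dx, sigma)
    Iyy = gaussian_filter(dy * dy, sigma)
    Ixy = gaussian_filter(dx * dy, sigma)
    # Harmonic-mean corner strength f = det(H) / tr(H), as in Brown et al.
    det = Ixx * Iyy - Ixy ** 2
    tr = Ixx + Iyy
    return det / (tr + 1e-10)

def get_corners(im, edge_discard=20):
    """Return (strength map, peak coordinates away from the image border)."""
    h = harris_response(im)
    coords = peak_local_max(h, min_distance=1)  # local maxima as (row, col)
    # discard interest points too close to the edge to fit a descriptor window
    mask = (
        (coords[:, 0] > edge_discard) & (coords[:, 0] < im.shape[0] - edge_discard)
        & (coords[:, 1] > edge_discard) & (coords[:, 1] < im.shape[1] - edge_discard)
    )
    return h, coords[mask]
```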

Next we will implement Adaptive Non-Maximal Suppression (ANMS, Section 3).
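A sketch of ANMS follows: each point's suppression radius is its distance to the nearest point that is sufficiently stronger (by the robustness factor c_robust = 0.9 from the paper), and we keep the points with the largest radii. The O(n²) loop and the function name are my own; the paper's idea is the radius criterion itself.

```python
import numpy as np

def anms(coords, strengths, n_keep=500, c_robust=0.9):
    """Adaptive Non-Maximal Suppression (Brown et al., Section 3).

    Keep the n_keep points whose distance to the nearest sufficiently
    stronger point (f_i < c_robust * f_j) is largest, giving an even
    spatial distribution of interest points."""
    n = len(coords)
    radii = np.full(n, np.inf)  # global maximum keeps an infinite radius
    for i in range(n):
        # points that dominate point i by the robustness factor
        stronger = strengths * c_robust > strengths[i]
        if np.any(stronger):
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()
    keep = np.argsort(-radii)[:n_keep]
    return coords[keep]
```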


B.2: Feature Descriptor Extraction

Next we will implement Feature Descriptor extraction (Section 4 of the paper). We won't worry about rotation-invariance – just extracting axis-aligned 8x8 patches.

Note: It’s extremely important to sample these patches from the larger 40x40 window to have a nice big blurred descriptor. Don’t forget to bias/gain-normalize the descriptors. We will ignore the wavelet transform section.
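The extraction step above can be sketched as follows: blur the image, sample every 5th pixel of the 40x40 window to get an 8x8 patch, then subtract the mean and divide by the standard deviation. The blur sigma here is an assumption on my part (anything that prevents aliasing when subsampling by 5 is reasonable).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_descriptors(im, coords, window=40, patch=8):
    """8x8 axis-aligned descriptors sampled from a blurred 40x40 window,
    bias/gain normalized to zero mean and unit standard deviation."""
    half = window // 2
    step = window // patch                         # sample every 5th pixel
    blurred = gaussian_filter(im, sigma=step / 2)  # blur before subsampling
    descs = []
    for r, c in coords:
        win = blurred[r - half:r + half:step, c - half:c + half:step]
        d = win.astype(float).ravel()
        d = (d - d.mean()) / (d.std() + 1e-10)     # bias/gain normalization
        descs.append(d)
    return np.array(descs)
```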


B.3: Feature Matching

Next, we will implement Feature Matching (Section 5 of the paper). That is, we need to find pairs of features that look similar and are thus likely to be good matches. For thresholding, we will use Lowe's simpler approach: threshold on the ratio of distances to the first and second nearest neighbors. We will consult Figure 6b in the paper for picking the threshold (but we will ignore Section 6 of the paper).
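A brute-force version of the ratio test can be sketched like this (the exhaustive pairwise distance computation and the function name are my own simplifications; the paper uses a faster approximate nearest-neighbor search):

```python
import numpy as np

def match_features(d1, d2, ratio_thresh=0.5):
    """Lowe ratio test: accept a match only when the nearest neighbor is
    much closer than the second nearest (1-NN / 2-NN distance < threshold)."""
    # pairwise Euclidean distances between the two descriptor sets
    dists = np.sqrt(((d1[:, None, :] - d2[None, :, :]) ** 2).sum(-1))
    matches = []
    for i in range(len(d1)):
        order = np.argsort(dists[i])
        nn1, nn2 = dists[i, order[0]], dists[i, order[1]]
        if nn1 / (nn2 + 1e-10) < ratio_thresh:
            matches.append((i, order[0]))
    return matches
```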

Feature extraction returned 500 and 362 interest points (after discarding points near the image edges) for the two images. Matching features with a harsh threshold of 0.5 reduced this to 24 pairs of points (i.e. 48 points across the two images).


B.4: RANSAC for Robust Homography

For step 4, we will use 4-point RANSAC to compute a robust homography estimate, then produce mosaics by adapting our code from Part A.
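The RANSAC loop can be sketched as below: repeatedly fit a homography to a random minimal sample of 4 correspondences, count inliers under a reprojection-error threshold, keep the largest inlier set, and refit on all of its points at the end. The iteration count and epsilon are my own placeholder values.

```python
import numpy as np

def compute_H(src, dst):
    """Homography from >= 4 point correspondences via the DLT and SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # least-squares solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)

def ransac_homography(src, dst, n_iters=1000, eps=2.0, seed=0):
    """4-point RANSAC: keep the homography with the largest inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), 4, replace=False)   # minimal sample
        H = compute_H(src[idx], dst[idx])
        # project all src points through H and measure reprojection error
        pts = np.hstack([src, np.ones((len(src), 1))]) @ H.T
        proj = pts[:, :2] / (pts[:, 2:] + 1e-12)
        inliers = np.linalg.norm(proj - dst, axis=1) < eps
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares fit on the full inlier set
    return compute_H(src[best_inliers], dst[best_inliers]), best_inliers
```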


Compared to the stitching of these two images made by manually selecting points in Part A, this version looks significantly more correct. Where before we saw a ghosted tree and slight misalignment in the water levels, this version corrects all of these issues. It is, however, a bit stretched out, likely due to the significant amount of camera rotation.


Here is the village mosaic from before run through the auto-stitching pipeline. This result came out less well than its manual counterpart: it looks like RANSAC matched points in the field and similar grass paths more strongly than the well (which didn't have much overlap). If you didn't know what the original images looked like, this could easily fool you; edge orientation detection is very powerful!


And finally, here is the mosaic of the lamp with the same base images as before. Unfortunately it looks like it matched the wrong points. I will investigate this further, but unfortunately not before the deadline, because I also have a 170 HW due today. It's been real :D.